
    Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks

    Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption in high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural networks in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This technique estimates the moments of the output distribution through sampling with different dropout masks; the output uncertainty of the network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method using benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that this yields promising results.
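    The overlap idea from the abstract can be sketched in a few lines: fit a Gaussian to the Monte Carlo dropout samples of each class output and compute the Bhattacharyya coefficient between the two Gaussians (1 means full overlap, 0 means none). This is a minimal NumPy illustration; the Gaussian fit, the simulated samples, and the sample sizes are assumptions for demonstration, not the paper's exact procedure.

```python
import numpy as np

def bhattacharyya_coefficient(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient between two 1-D Gaussians (1 = identical)."""
    db = (0.25 * np.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
          + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2))
    return np.exp(-db)

# Simulated Monte Carlo dropout samples of the outputs for two classes
rng = np.random.default_rng(0)
samples_a = rng.normal(0.7, 0.05, size=1000)  # class A output samples
samples_b = rng.normal(0.3, 0.05, size=1000)  # class B output samples

# Overlap-based uncertainty: small here, since the two outputs barely overlap
bc = bhattacharyya_coefficient(samples_a.mean(), samples_a.var(),
                               samples_b.mean(), samples_b.var())
```

A high coefficient signals inter-class confusion even when the per-class sample variance alone looks small, which is the contrast the abstract draws with the variance-based metric.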

    Data-efficient sensor upgrade path using knowledge distillation

    Deep neural networks have achieved state-of-the-art performance in image classification. Due to this success, deep learning is now also being applied to other data modalities such as multispectral images, lidar and radar data. However, successfully training a deep neural network requires a large dataset. Therefore, transitioning to a new sensor modality (e.g., from regular camera images to multispectral camera images) might result in a drop in performance, due to the limited availability of data in the new modality. This might hinder the adoption rate and time to market for new sensor technologies. In this paper, we present an approach that leverages the knowledge of a teacher network, trained on the original data modality, to improve the performance of a student network on a new data modality: a technique known in the literature as knowledge distillation. By applying knowledge distillation to the problem of sensor transition, we can greatly speed up this process. We validate this approach using a multimodal version of the MNIST dataset. Especially when little data is available in the new modality (i.e., 10 images), training with additional teacher supervision results in increased performance, with the student network scoring a test set accuracy of 0.77, compared to an accuracy of 0.37 for the baseline. We also explore two extensions to the default method of knowledge distillation, which we evaluate on a multimodal version of the CIFAR-10 dataset: an annealing scheme for the hyperparameter alpha, and selective knowledge distillation. Of these two, the first yields the best results: choosing the optimal annealing scheme results in an increase in test set accuracy of 6%. Finally, we apply our method to the real-world use case of skin lesion classification.
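    The distillation loss the abstract refers to is, in its standard (Hinton-style) form, a blend of hard-label cross-entropy and a temperature-softened KL term weighted by alpha. The sketch below, in plain NumPy, shows that blend plus one possible linear annealing schedule for alpha; the temperature value, the schedule shape, and the function names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, T=2.0):
    """alpha * hard cross-entropy + (1 - alpha) * soft KL(teacher || student)."""
    hard = -np.log(softmax(student_logits)[label] + 1e-12)
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # T^2 factor keeps soft-target gradients on the same scale as the hard term
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))) * T * T
    return alpha * hard + (1.0 - alpha) * soft

def annealed_alpha(epoch, total_epochs):
    """Linear schedule: lean on the teacher early, on hard labels later (assumed)."""
    return min(1.0, epoch / total_epochs)
```

With alpha = 0 the student trains purely on the teacher's soft targets, which is the regime that matters most when only a handful of new-modality images are available.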

    Enhanced visualization of blood and pigment in multispectral skin dermoscopy

    Background and objectives: Dermoscopy has proven its value in the diagnosis of skin cancer and is therefore well established in daily dermatology practice. Up until now, analogue white-light dermoscopy has been the standard. Multispectral dermoscopy is based on illumination of the skin with narrowband light sources of different wavelengths. Each of these wavelengths is absorbed differently by skin chromophores, such as pigment or (de)oxygenated blood. Multispectral dermoscopy could therefore enhance the visualization of vasculature and pigment. We illustrate the additional information offered by such "skin parameter maps" in several cases of basal cell carcinoma and Bowen's disease. Methods: Using a new digital multispectral dermatoscope, skin images at multiple wavelengths are collected from different types of skin lesions. These images, together with knowledge of skin absorption properties, result in so-called "skin parameter maps". Results: A "pigment contrast map", which shows the relative concentration of primarily pigment, and a "blood contrast map", which shows the relative concentration of primarily blood, were created. The latter is especially important in diagnosing keratinocyte skin cancer, since vascular structures are a characteristic feature, as further illustrated in the study. Conclusions: Skin parameter maps based on multispectral images can give better insight into the inner structures of lesions, especially lesions with characteristic blood vessels, such as in Bowen's disease and basal cell carcinoma. Skin parameter maps can be used as a complement to regular dermoscopy and could potentially facilitate diagnosing skin lesions.
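    The mapping from narrowband images to "skin parameter maps" can be sketched as a per-pixel linear unmixing: model the log-attenuation at each wavelength as a weighted sum of chromophore absorptions (a Beer-Lambert-style assumption) and solve for the melanin and hemoglobin weights by least squares. The absorption matrix below is illustrative only; real coefficients depend on the device's wavelengths and on published chromophore spectra.

```python
import numpy as np

# Illustrative (not measured) relative absorption of [melanin, blood]
# at three hypothetical narrowband wavelengths.
ABSORPTION = np.array([
    [0.9, 0.2],
    [0.5, 0.8],
    [0.3, 0.4],
])

def skin_parameter_maps(multispectral, eps=1e-6):
    """Least-squares unmixing of per-pixel log-attenuation into a
    pigment (melanin) contrast map and a blood (hemoglobin) contrast map.

    multispectral: H x W x 3 array of reflectance values in (0, 1].
    """
    attenuation = -np.log(np.clip(multispectral, eps, 1.0))
    h, w, n = attenuation.shape
    # Solve ABSORPTION @ c = attenuation for every pixel at once
    coeffs, *_ = np.linalg.lstsq(ABSORPTION,
                                 attenuation.reshape(-1, n).T, rcond=None)
    maps = coeffs.T.reshape(h, w, 2)
    return maps[..., 0], maps[..., 1]  # pigment map, blood map
```

With three wavelengths and two chromophores the system is overdetermined, which is what lets the two maps separate vascular structures from pigment in the way the abstract describes.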